
    The dorsal visual stream revisited: Stable circuits or dynamic pathways?

    In both the macaque and the human brain, information about visual motion flows from the extrastriate area V6 along two different paths: a dorsolateral one towards areas MT/V5, MST, and V3A, and a dorsomedial one towards the visuomotor areas of the superior parietal lobule (V6A, MIP, VIP). The dorsolateral visual stream is involved in many aspects of visual motion analysis, including the recognition of object motion and self-motion. The dorsomedial stream uses visual motion information to continuously monitor the spatial location of objects while we are looking and/or moving around, allowing skilled reaching for and grasping of objects in structured, dynamically changing environments. Grasping activity is present in two areas of the dorsal stream, AIP and V6A: area AIP is more involved than V6A in object recognition, V6A in encoding vision for action. We suggest that V6A is involved in the fast control of prehension and plays a critical role in biomechanically selecting appropriate postures during reach-to-grasp behaviors.

    In everyday life, numerous functional networks, often involving the same cortical areas, are continuously in action in the dorsal visual stream, with each network dynamically activated or inhibited according to the context. The dorsolateral and dorsomedial streams are only two examples of these networks. Many other streams have been described in the literature, but it is worth noting that the same cortical area, and even the same neurons within an area, are not specific for just one functional property, being part of networks that encode multiple functional aspects. We propose to conceive of the cortical streams not as fixed series of interconnected cortical areas, in which each area belongs univocally to one stream and is strictly involved in only one function, but as interconnected neuronal networks, often involving the same neurons, that participate in a number of functional processes and whose activation changes dynamically according to the context.

    The posterior parietal area V6A: an attentionally-modulated visuomotor region involved in the control of reach-to-grasp action

    In the macaque, the posterior parietal area V6A is involved in the control of all phases of reach-to-grasp actions: the transport phase, given that reaching neurons are sensitive to the direction and amplitude of arm movement, and the grasping phase, since reaching neurons are also sensitive to wrist orientation and hand shaping. Reaching and grasping activity are corollary discharges which, together with the somatosensory and visual signals related to the same movement, allow V6A to act as a state estimator that signals discrepancies during the motor act in order to maintain consistency between the ongoing movement and the desired one. Area V6A is also able to encode the target of an action thanks to gaze-dependent visual neurons and real-position cells. Here, we advance the hypothesis that V6A also uses the spotlight of attention to guide goal-directed movements of the hand, and that it hosts a priority map specific for the guidance of reaching arm movements, combining bottom-up inputs such as visual responses with top-down signals such as reaching plans.

    Reaching and grasping actions and their context shape the perception of object size

    Humans frequently estimate the size of objects in order to grasp them. In fact, when performing an action, our perception is focused on the visual properties of the object that enable us to successfully execute the action. The motor system is also able to influence perception, but only a few studies have reported evidence for action-induced modifications of visual perception. Here, we looked for a feature-specific perceptual modulation before and after a reaching or a grasping action. Human participants were instructed to either reach for or grasp two-dimensional bars of different sizes and to perform a size perception task before and after the action in two contexts: one in which they knew the subsequent type of movement and one in which they did not. We found significant modifications of the perceived size of the stimuli, more pronounced after grasping than after reaching. The mere knowledge of the subsequent action type significantly affected size perception before movement execution, with consistent results in both manual and verbal reports. These data represent direct evidence that, in natural conditions without manipulation of visual information, the action type and the action context dynamically modulate size perception, shaping it according to the information required to recognize and interact with objects.

    Multiple coordinate systems and motor strategies for reaching movements when eye and hand are dissociated in depth and direction

    Reaching behavior represents one of the basic aspects of human cognitive abilities important for interaction with the environment. Reaching movements towards visual objects are controlled by mechanisms based on coordinate systems that transform the spatial information of target location into an appropriate motor response. Although recent works have extensively studied the encoding of target position for reaching in three-dimensional space at the behavioral level, the combined analysis of reach errors and movement variability has so far been investigated by few studies. Here we did so by testing 12 healthy participants in an experiment in which reaching targets were presented at different depths and directions under foveal and peripheral viewing conditions. Each participant executed a memory-guided task, reaching to the memorized position of the target. A combination of vector and gradient analysis, novel for behavioral data, was applied to analyze the patterns of reach errors for different combinations of eye/target positions. The results showed reach error patterns based on both eye- and space-centered coordinate systems: errors in depth were biased towards a space-centered representation, whereas errors in direction reflected a mixture of space- and eye-centered representations. We also calculated movement variability to describe the different trajectory strategies adopted by participants while reaching to the different eye/target configurations tested. In direction, the distribution of variability differed between configurations that shared the same relative eye/target configuration, whereas it was similar in configurations that shared the same spatial position of targets. In depth, the variability showed more similar distributions in both pairs of eye/target configurations tested. These results suggest that reaching movements executed in geometries requiring hand/eye dissociations in direction and depth rely on multiple coordinate systems and on trajectory strategies that vary with the eye/target configuration and with the dimension of space.
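
    As a rough illustration of the endpoint analysis described above, the sketch below separates systematic bias (the mean error vector per target) from variable error (dispersion of endpoints around their own mean) on synthetic data. The target layout, noise model, and variable names are illustrative assumptions, not the study's actual pipeline or its vector/gradient method.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical targets at two depths and three directions (cm).
        targets = np.array([[-10, 30], [0, 30], [10, 30],
                            [-10, 40], [0, 40], [10, 40]], dtype=float)
        n_trials = 40

        # Simulated reach endpoints: target position plus endpoint noise.
        endpoints = targets[:, None, :] + rng.normal(0, 1.5, (len(targets), n_trials, 2))

        # Reach error vectors: endpoint minus target, per trial.
        errors = endpoints - targets[:, None, :]

        # Constant (systematic) error: the mean error vector per target.
        constant_error = errors.mean(axis=1)

        # Variable error: dispersion of endpoints around their own mean,
        # a simple proxy for movement variability per configuration.
        variable_error = np.linalg.norm(
            endpoints - endpoints.mean(axis=1, keepdims=True), axis=2).mean(axis=1)

        for t, ce, ve in zip(targets, constant_error, variable_error):
            print(f"target {t}: bias = {ce.round(2)} cm, variability = {ve:.2f} cm")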

    Functional organization of the caudal part of the human superior parietal lobule

    As in the macaque, the caudal portion of the human superior parietal lobule (SPL) plays a key role in a series of perceptual, visuomotor, and somatosensory processes. Here, we review the functional properties of three separate portions of the caudal SPL: the posterior parieto-occipital sulcus (POs), the anterior POs, and the anterior part of the caudal SPL. We propose that the posterior POs is mainly dedicated to the analysis of visual motion cues useful for object motion detection during self-motion and for spatial navigation, while the more anterior parts are implicated in the visuomotor control of limb actions. The anterior POs is mainly involved in using the spotlight of attention to guide reach-to-grasp hand movements, especially in dynamic environments. The anterior part of the caudal SPL plays a central role in visually guided locomotion, being implicated in controlling leg-related movements as well as the interaction of the four limbs with the environment, and in encoding egomotion-compatible optic flow. Together, these functions reveal how strongly the caudal SPL is implicated in skilled visually guided behaviors.

    Horizontal target size perturbations during grasping movements are described by subsequent size perception and saccade amplitude

    Perception and action are essential in our day-to-day interactions with the environment. Despite the dual-stream theory of action and perception, it is now accepted that action and perception processes interact with each other. However, little is known about how unpredicted changes of target size during grasping actions affect perception. We assessed whether size perception and saccade amplitude were affected before and after grasping a target that changed its horizontal size during action execution, in the presence or absence of tactile feedback. We tested twenty-one participants in 4 blocks of 30 trials. Blocks were divided into two tactile feedback paradigms: tactile and non-tactile. Trials consisted of 3 sequential phases: pre-grasping size perception, grasping, and post-grasping size perception. During the pre- and post-phases, participants executed a saccade towards a horizontal bar and manually estimated the bar's size. During the grasping phase, participants were asked to execute a saccade towards the bar and to make a grasping action towards the screen. While they grasped, one of 3 horizontal size perturbation conditions was applied: non-perturbation, shortening, or lengthening. Perturbations occurred in 30% of the trials, in which the bar was symmetrically shortened or lengthened by 33% of its original size. Participants' hand and eye positions were recorded by a motion capture system and a mobile eye-tracker, respectively. After grasping, in both the tactile and non-tactile feedback paradigms, size estimation was significantly reduced in the lengthening (p = 0.002) and non-perturbation (p < 0.001) conditions, whereas shortening did not induce significant adjustments (p = 0.86). After grasping, saccade amplitude became significantly longer in shortening (p < 0.001) and significantly shorter in lengthening (p < 0.001); the non-perturbation condition did not display adjustments (p = 0.95). Tactile feedback did not change the collected perceptual responses, but horizontal size perturbations did, suggesting that all relevant target information used in the movement can be extracted from post-action target perception.
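
    A minimal sketch of the perturbation schedule described above, in which 30% of trials symmetrically shorten or lengthen the bar by 33% of its original size. The function name, the 60 mm base width, and the use of Python's random module are assumptions for illustration, not the study's actual stimulus code.

        import random

        def perturbed_size(base_width_mm):
            """Return the perturbation condition and the resulting bar width."""
            if random.random() < 0.30:                  # 30% of trials are perturbed
                condition = random.choice(["shortening", "lengthening"])
                factor = 0.67 if condition == "shortening" else 1.33   # +/- 33%
                return condition, base_width_mm * factor
            return "non-perturbation", base_width_mm

        random.seed(1)
        for trial in range(5):
            print(trial, perturbed_size(60.0))          # 60 mm base width (assumed)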

    Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks

    Despite the well-recognized role of the posterior parietal cortex (PPC) in processing sensory information to guide action, the differential encoding properties of this dynamic processing, as operated by different PPC brain areas, are scarcely known. Within the monkey's PPC, the superior parietal lobule hosts areas V6A, PEc, and PE, included in the dorsomedial visual stream that is specialized in planning and guiding reaching movements. Here, a Convolutional Neural Network (CNN) approach is used to investigate how information is processed in these areas. We trained two macaque monkeys to perform a delayed reaching task towards 9 positions (distributed over 3 depth and 3 direction levels) in 3D peripersonal space. The activity of single cells was recorded from V6A, PEc, and PE and fed to convolutional neural networks that were designed and trained to exploit the temporal structure of neuronal activation patterns to decode the target positions reached by the monkey. Bayesian Optimization was used to define the main CNN hyperparameters. In addition to discrete positions in space, we used the same network architecture to decode plausible reaching trajectories. We found that data from the most caudal areas, V6A and PEc, outperformed data from PE in spatial position decoding. In all areas, decoding accuracy started to increase at the time the reach target was instructed to the monkey and reached a plateau at movement onset. The results support a dynamic encoding of the different phases and properties of the reaching movement, differentially distributed over a network of interconnected areas. This study highlights the usefulness of decoding neurons' firing rates via CNNs to improve our understanding of how sensorimotor information is encoded in the PPC to perform reaching movements. The results may have implications for novel neuroprosthetic devices based on the decoding of these rich signals to faithfully carry out patients' intentions.
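
    The sketch below shows one plausible way to decode nine reach targets from binned firing rates with a small 1D CNN in PyTorch, using synthetic data. The layer sizes, numbers of neurons and time bins, and the random inputs are assumptions; the paper selected its actual hyperparameters via Bayesian Optimization, which is omitted here.

        import torch
        import torch.nn as nn

        n_neurons, n_bins, n_targets = 100, 50, 9   # assumed dimensions

        class SpikeCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(n_neurons, 32, kernel_size=5, padding=2),  # temporal filters
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),     # collapse the time axis
                    nn.Flatten(),
                    nn.Linear(64, n_targets),    # one logit per reach target
                )

            def forward(self, x):                # x: (trials, neurons, time bins)
                return self.net(x)

        # Synthetic stand-in for trial-wise binned firing rates.
        x = torch.randn(16, n_neurons, n_bins)
        y = torch.randint(0, n_targets, (16,))

        model = SpikeCNN()
        loss = nn.CrossEntropyLoss()(model(x), y)
        loss.backward()                          # gradients for one toy training step
        print(f"toy loss: {loss.item():.3f}")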

    A common neural substrate for processing scenes and egomotion-compatible visual motion

    Neuroimaging studies have revealed two separate classes of category-selective regions, specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, a possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known "localizer" fMRI experiments, consisting of passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and of coherently moving fields of dots simulating the visual stimulation experienced during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to the flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places than to faces, and that all the scene-selective areas (parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow than to random motion. A conjunction analysis between the scene/place and flow field stimuli revealed that the most important focus of common activation lay in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a unique motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
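
    A minimal sketch of a minimum-statistic conjunction between two contrast maps (scenes > faces and flow fields > random motion), assuming z-maps on a common voxel grid. The threshold and array shapes are illustrative assumptions, not the study's actual statistics.

        import numpy as np

        rng = np.random.default_rng(0)
        # Stand-in z-maps on a common 64 x 64 x 40 voxel grid (assumed shape).
        z_scene = rng.normal(0, 1, (64, 64, 40))   # scenes > faces
        z_flow  = rng.normal(0, 1, (64, 64, 40))   # flow fields > random motion

        z_thresh = 3.1                             # e.g., one-tailed p < .001 (assumed)
        # Minimum-statistic conjunction: a voxel survives only if it is
        # significant in BOTH contrasts.
        conjunction = np.minimum(z_scene, z_flow) > z_thresh
        print(f"{conjunction.sum()} voxels active in both contrasts")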

    Machine learning methods detect arm movement impairments in a patient with parieto-occipital lesion using only early kinematic information

    Patients with lesions of the parieto-occipital cortex typically misreach visual targets that they correctly perceive (optic ataxia). Although optic ataxia was described more than 30 years ago, distinguishing this condition from physiological behavior using kinematic data is still far from being achieved. Here, combining kinematic analysis with machine learning methods, we compared the reaching performance of a patient with bilateral occipito-parietal damage with that of 10 healthy controls. They performed visually guided reaches towards targets located at different depths and directions. Using the horizontal, sagittal, and vertical deviations of the trajectories, we computed classification accuracy in discriminating the reaching performance of the patient from that of the controls. Accurate predictions of the patient's deviations were obtained after only 20% of movement execution in all the spatial positions tested. This classification based on initial trajectory decoding was possible for both the directional and the depth components of the movement, suggesting that this method could characterize pathological motor behavior in wider frameworks.
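
    A minimal sketch of the classification idea on synthetic data: features are trajectory deviations truncated at 20% of movement time, and a cross-validated classifier separates patient from control reaches. The linear SVM, feature layout, and simulated drift are assumptions, not necessarily the paper's exact method.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_samples = 100          # normalized time points per trajectory (assumed)
        cutoff = n_samples // 5  # keep only the first 20% of the movement

        def make_trials(n, bias):
            # Horizontal/sagittal/vertical deviations; the "patient" group
            # gets a simulated drift that grows over the movement.
            t = np.linspace(0, 1, n_samples)
            return np.stack([rng.normal(0, 1, (n_samples, 3)) + bias * t[:, None]
                             for _ in range(n)])

        controls = make_trials(60, bias=0.0)
        patient  = make_trials(60, bias=2.0)

        # Truncate every trajectory at 20% of movement time, then flatten.
        X = np.concatenate([controls, patient])[:, :cutoff, :].reshape(120, -1)
        y = np.array([0] * 60 + [1] * 60)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"early-trajectory classification accuracy: {scores.mean():.2f}")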

    Motor decoding from the posterior parietal cortex using deep neural networks

    Objective. Motor decoding is crucial to translate neural activity for brain-computer interfaces (BCIs) and provides information on how motor states are encoded in the brain. Deep neural networks (DNNs) are emerging as promising neural decoders. Nevertheless, it is still unclear how different DNNs perform in different motor decoding problems and scenarios, and which network could be a good candidate for invasive BCIs. Approach. Fully-connected, convolutional, and recurrent neural networks (FCNNs, CNNs, RNNs) were designed and applied to decode motor states from neurons recorded in area V6A of the posterior parietal cortex (PPC) of macaques. Three motor tasks were considered, involving reaching and reach-to-grasping (the latter under two illumination conditions). DNNs decoded nine reaching endpoints in 3D space or five grip types using a sliding-window approach within the trial course. To evaluate the decoders in a broad variety of scenarios, performance was also analyzed while artificially reducing the number of recorded neurons and trials, and while performing transfer learning from one task to another. Finally, the accuracy time course was used to analyze V6A motor encoding. Main results. DNNs outperformed a classic Naive Bayes classifier, and CNNs additionally outperformed XGBoost and Support Vector Machine classifiers across the motor decoding problems. CNNs were the top-performing DNNs when using fewer neurons and trials, and task-to-task transfer learning improved performance especially in the low-data regime. Lastly, V6A neurons encoded reaching and reach-to-grasping properties as early as action planning, with the encoding of grip properties occurring later, closer to movement execution, and appearing weaker in darkness. Significance. The results suggest that CNNs are effective candidates for neural decoders in invasive human BCIs based on PPC recordings, also reducing BCI calibration times through transfer learning, and that a CNN-based data-driven analysis may provide insights into the encoding properties and functional roles of brain regions.
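
    A minimal sketch of the sliding-window decoding scheme on synthetic firing rates: a decoder is trained and evaluated on each successive time window, yielding an accuracy time course that rises once target information becomes available. The window size, the logistic-regression decoder, and the injected signal are assumptions for illustration, not the paper's DNN decoders.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_neurons, n_bins, n_classes = 180, 80, 60, 9   # assumed sizes
        rates = rng.poisson(3.0, (n_trials, n_neurons, n_bins)).astype(float)
        labels = rng.integers(0, n_classes, n_trials)

        # Inject a label-dependent signal from bin 20 onwards, mimicking
        # encoding that emerges once the target is instructed.
        rates[:, :10, 20:] += labels[:, None, None] * 0.5

        win = 10  # bins per window
        for start in range(0, n_bins - win + 1, win):
            X = rates[:, :, start:start + win].mean(axis=2)   # window-averaged rates
            acc = cross_val_score(LogisticRegression(max_iter=1000),
                                  X, labels, cv=5).mean()
            print(f"window {start:2d}-{start + win:2d}: accuracy {acc:.2f}")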